85 research outputs found
Full Resolution Image Compression with Recurrent Neural Networks
This paper presents a set of full-resolution lossy image compression methods
based on neural networks. Each of the architectures we describe can provide
variable compression rates during deployment without requiring retraining of
the network: each network need only be trained once. All of our architectures
consist of a recurrent neural network (RNN)-based encoder and decoder, a
binarizer, and a neural network for entropy coding. We compare RNN types (LSTM,
associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study
"one-shot" versus additive reconstruction architectures and introduce a new
scaled-additive framework. We compare to previous work, showing improvements of
4.3%-8.8% AUC (area under the rate-distortion curve), depending on the
perceptual metric used. As far as we know, this is the first neural network
architecture that is able to outperform JPEG at image compression across most
bitrates on the rate-distortion curve on the Kodak dataset images, with and
without the aid of entropy coding.
Comment: Updated with content for CVPR and moved supplemental material to an
external link due to size limitations
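The additive reconstruction scheme the abstract contrasts with "one-shot" decoding can be sketched as an iterative residual loop. This is a minimal illustration, not the paper's implementation: `encode_step` and `decode_step` are hypothetical stand-ins for the RNN encoder and decoder, and the stochastic binarizer is replaced by a deterministic sign for clarity.

```python
import numpy as np

def binarize(code):
    # Deterministic sign stands in for the paper's stochastic binarizer.
    return np.sign(code)

def compress_additive(image, encode_step, decode_step, n_iters=4):
    """Additive reconstruction: each iteration encodes the current
    residual, and its decoded output is added to the running
    reconstruction. More iterations = more bits = higher quality,
    which is how one trained network serves variable bit rates."""
    reconstruction = np.zeros_like(image)
    bits = []
    for _ in range(n_iters):
        residual = image - reconstruction   # what is still missing
        code = binarize(encode_step(residual))
        bits.append(code)
        reconstruction = reconstruction + decode_step(code)
    return bits, reconstruction
```

Truncating the `bits` list after fewer iterations yields a lower-rate, lower-quality reconstruction from the same trained network; the scaled-additive variant additionally learns a per-iteration scale on the decoded residual.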
Multi-Realism Image Compression with a Conditional Generator
By optimizing the rate-distortion-realism trade-off, generative compression
approaches produce detailed, realistic images, even at low bit rates, instead
of the blurry reconstructions produced by rate-distortion optimized models.
However, previous methods do not explicitly control how much detail is
synthesized, which results in a common criticism of these methods: users might
be worried that a misleading reconstruction far from the input image is
generated. In this work, we alleviate these concerns by training a decoder that
can bridge the two regimes and navigate the distortion-realism trade-off. From
a single compressed representation, the receiver can decide to either
reconstruct a low mean squared error reconstruction that is close to the input,
a realistic reconstruction with high perceptual quality, or anything in
between. With our method, we set a new state of the art in distortion-realism,
pushing the frontier of achievable distortion-realism pairs, i.e., our method
achieves better distortion at high realism and better realism at low
distortion than previous methods.
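The receiver-side control can be caricatured as a decoder conditioned on a realism weight β in [0, 1]. All names here are hypothetical, and the pixel-space interpolation below is only a toy stand-in: the paper's decoder is a single learned conditional generator, not a blend of two fixed decoders.

```python
import numpy as np

def conditional_decode(latent, beta, decode_mse, decode_gan):
    """Toy sketch of navigating the distortion-realism trade-off at
    decode time. beta = 0 -> low-MSE reconstruction close to the input;
    beta = 1 -> realistic reconstruction with high perceptual quality.
    One compressed `latent` serves every beta."""
    assert 0.0 <= beta <= 1.0
    return (1.0 - beta) * decode_mse(latent) + beta * decode_gan(latent)
```

The key property the sketch preserves is that the sender transmits one representation and the choice of operating point is deferred to the receiver.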
Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
We propose a method for lossy image compression based on recurrent,
convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000,
and JPEG as measured by MS-SSIM. We introduce three improvements over previous
research that lead to this state-of-the-art result. First, we show that
training with a pixel-wise loss weighted by SSIM increases reconstruction
quality according to several metrics. Second, we modify the recurrent
architecture to improve spatial diffusion, which allows the network to more
effectively capture and propagate image information through the network's
hidden state. Finally, in addition to lossless entropy coding, we use a
spatially adaptive bit allocation algorithm to more efficiently use the limited
number of bits to encode visually complex image regions. We evaluate our method
on the Kodak and Tecnick image sets and compare against standard codecs as well
as recently published methods based on deep neural networks.
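The third improvement, spatially adaptive bit allocation, can be sketched as distributing a fixed bit budget across image tiles in proportion to a complexity score. This is an illustrative sketch, not the paper's algorithm: per-tile variance here is a hypothetical cheap stand-in for whatever complexity measure the method actually uses.

```python
import numpy as np

def tile_complexity(tiles):
    # Per-tile variance as a crude proxy for visual complexity:
    # flat regions score low, textured regions score high.
    return np.array([t.var() for t in tiles])

def allocate_bits(complexity, total_bits):
    """Spatially adaptive allocation: give visually complex tiles a
    larger share of the limited bit budget."""
    frac = complexity / complexity.sum()
    return np.floor(frac * total_bits).astype(int)
```

In a recurrent codec like the one described above, the per-tile budget would translate into running more or fewer encoding iterations on each tile.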
Towards a Semantic Perceptual Image Metric
We present a full reference, perceptual image metric based on VGG-16, an
artificial neural network trained on object classification. We fit the metric
to a new database based on 140k unique images annotated with ground truth by
human raters who received minimal instruction. The resulting metric shows
competitive performance on TID2013, a database widely used to assess image
quality assessment methods. More interestingly, it shows strong responses to
objects potentially carrying semantic relevance such as faces and text, which
we demonstrate using a visualization technique and ablation experiments. In
effect, the metric appears to model a higher influence of semantic context on
judgments, which we observe particularly in untrained raters. As the vast
majority of users of image processing systems are unfamiliar with Image Quality
Assessment (IQA) tasks, these findings may have significant impact on
real-world applications of perceptual metrics.
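A full-reference metric built on a classification network typically scores a weighted distance between deep features of the reference and distorted images. The sketch below assumes features have already been extracted (e.g., from several VGG-16 layers); the layer choice, weights, and exact distance are assumptions for illustration, with the weights in the paper fitted to the human-rated database.

```python
import numpy as np

def perceptual_distance(feats_ref, feats_dist, layer_weights):
    """Full-reference perceptual distance: weighted mean squared
    distance between deep feature maps of the reference and the
    distorted image, summed over network layers."""
    d = 0.0
    for w, fr, fd in zip(layer_weights, feats_ref, feats_dist):
        d += w * np.mean((fr - fd) ** 2)
    return d
```

Because the features come from an object-classification network, regions carrying semantic content (faces, text) produce large feature differences when distorted, which is the behavior the ablation experiments probe.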
Neural Video Compression using GANs for Detail Synthesis and Propagation
We present the first neural video compression method based on generative
adversarial networks (GANs). Our approach significantly outperforms previous
neural and non-neural video compression methods in a user study, setting a new
state-of-the-art in visual quality for neural methods. We show that the GAN
loss is crucial to obtain this high visual quality. Two components make the GAN
loss effective: we i) synthesize detail by conditioning the generator on a
latent extracted from the warped previous reconstruction to then ii) propagate
this detail with high-quality flow. We find that user studies are required to
compare methods, i.e., none of our quantitative metrics were able to predict
all studies. We present the network design choices in detail, and ablate them
with user studies.
Comment: First two authors contributed equally. ECCV camera-ready version
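The decode path described above (warp the previous reconstruction with flow, then let a generator conditioned on a latent from the warped frame synthesize and propagate detail) can be sketched as follows. This is a simplified illustration: the nearest-neighbor warp stands in for the bilinear sampling with learned flow that real systems use, and `generator` is a hypothetical placeholder for the GAN decoder.

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp `frame` (H x W) by a per-pixel `flow` (H x W x 2),
    using nearest-neighbor sampling for illustration."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def decode_frame(prev_recon, flow, latent, generator):
    # Warp the previous reconstruction forward in time, then condition
    # the generator on it so previously synthesized detail propagates
    # instead of being re-synthesized (and flickering) every frame.
    warped = warp(prev_recon, flow)
    return generator(warped, latent)
```

Propagating synthesized detail through the warp is what keeps GAN-generated texture temporally stable across frames.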